
Setting general connection properties

This section describes how to configure general connection properties. For an explanation of how to configure advanced connection properties, see Setting advanced connection properties.

To add a Google Cloud Storage target endpoint to Qlik Replicate:

  1. In Tasks view, click Manage Endpoint Connections to open the Manage Endpoint Connections dialog box. Then click the New Endpoint Connection button. For more information on adding an endpoint to Qlik Replicate, see Defining and managing endpoints.
  2. In the Name field, type a name for your endpoint. This can be any name that will help to identify the endpoint being used.
  3. Optionally, in the Description field, type a description that helps to identify the endpoint.
  4. Select Target as the endpoint role.
  5. Select Google Cloud Storage as the endpoint Type.
  6. Configure the remaining settings in the General tab as described in the table below.
Google Cloud Storage option descriptions
Option Description

JSON credentials

The JSON credentials for the service account key with read and write access to the Google Cloud Storage bucket.
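Example of the standard service account key format downloaded from the Google Cloud console (all values below are placeholders; this is not a Replicate-specific file). The key document is typically pasted into this field in full:

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "replicate-writer@my-project.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

The service account identified by client_email must have read and write access to the bucket specified below.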

Bucket name

The Google Cloud Storage bucket.

Target folder

Where to create the data files in the specified bucket.

File Attributes

Delimiters can be standard characters or a hexadecimal (hex) value. Note that the "0x" prefix must be used to denote a hexadecimal delimiter (e.g. 0x01 = SOH). In the Field delimiter, Record delimiter, and Null value fields, the delimiter can consist of concatenated hex values (e.g. 0x0102 = SOH+STX), whereas in the Quote character and Escape character fields, it can only be a single hex value.

Information note

The hexadecimal number 0x00 is not supported (i.e. only 0x01-0xFF are supported).
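To illustrate how a "0x"-prefixed value maps to literal delimiter characters, here is a minimal sketch (not Replicate code; the function name is illustrative):

```python
# Illustrative sketch: how a "0x"-prefixed hex delimiter string such as
# "0x0102" maps to the control characters SOH followed by STX.
def decode_delimiter(value: str) -> str:
    """Return the literal delimiter characters for a standard or hex value."""
    if value.startswith("0x"):
        # Concatenated hex values: each pair of hex digits is one character.
        return bytes.fromhex(value[2:]).decode("latin-1")
    return value

print(repr(decode_delimiter("0x01")))    # a single SOH character
print(repr(decode_delimiter("0x0102")))  # SOH followed by STX
print(repr(decode_delimiter(",")))       # a standard character passes through
```
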

Format

You can choose to create the target files in CSV, JSON or Parquet format.

Information noteParquet format is supported from Replicate May 2022 Service Release 02 only.

In a JSON file, each record is represented by a single line, as in the following example:

{ "book_id": 123, "title": "Alice in Wonderland", "price": 6.99, "is_hardcover": false }

{ "book_id": 456, "title": "Winnie the Pooh", "price": 6.49, "is_hardcover": true }

{ "book_id": 789, "title": "The Cat in the Hat", "price": 7.23, "is_hardcover": true }

Information note

Changing the format (for example, from CSV to JSON or from JSON to CSV) while the task is in a stopped state and then resuming the task is not supported.

Information note

If you choose JSON format, the following fields will be hidden as they are only relevant to CSV format: Field delimiter, Record delimiter, Null value, Quote character, Escape character, and Add metadata header.

For information about data type mappings when using Parquet format and limitations, see Mapping from Qlik Replicate data types to Parquet and Limitations and considerations.

Field delimiter

The delimiter that will be used to separate fields (columns) in the target files. The default is a comma.

Example using a comma as a delimiter:

"mike","male"

Record delimiter

The delimiter that will be used to separate records (rows) in the target files. The default is a newline (\n).

Example:

"mike","male"\n

"sara","female"\n

Null value

The string that will be used to indicate a null value in the target files.

Example (where \n is the record delimiter and @ is the null value):

"mike","male",295678\n

"sara","female",@\n

Quote character

The character that will be used at the beginning and end of a text column. The default is the double-quote character ("). When a column that contains column delimiters is enclosed in double-quotes, the column delimiter characters are interpreted as actual data, and not as column delimiters.

Example (where a @ is the quote character):

@mike@,@male@

Quote escape character

The character used to escape a quote character in the actual data. The default is the double-quote character (").

Example (where " is the quote character and \ is the escape character):

1955,"old, \"rare\", Chevrolet","$1000"

Add metadata header

When the target file format is set to CSV, you can optionally add a header row to the data files. The header row can contain the source column names and/or the intermediate (i.e. Replicate) data types.

Example of a target file with a header row when both With column names and With data types are selected:

Position:DECIMAL(38,0),Color:VARCHAR(10)

1,"BLUE"

2,"BROWN"

3,"RED"

...

Maximum file size

The maximum size a file can reach before it is closed (and optionally compressed). This value applies both to data files and to Reference Files.

For information on generating reference files, see Setting advanced connection properties.

Compress files using

Choose one of the compression options to compress the target files or NONE (the default) to leave them uncompressed. Note that the available compression options are determined by the selected file format.

Change Processing option descriptions
Option Description

Apply/Store changes when file size reaches

Specify the maximum size of Change Data to accumulate before uploading the file to Google Cloud Storage.

Apply/Store changes when Elapsed time reaches

Specify the maximum time to wait before applying the changes.
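The two thresholds work together: the file is uploaded as soon as either limit is reached, whichever comes first. A minimal sketch of that flush decision (the class and method names are illustrative, not Replicate internals):

```python
import time


class ChangeBuffer:
    """Illustrative sketch: accumulate change records and decide when to
    upload, based on whichever threshold (size or elapsed time) is hit first."""

    def __init__(self, max_bytes: int, max_seconds: float):
        self.max_bytes = max_bytes
        self.max_seconds = max_seconds
        self.records: list[bytes] = []
        self.size = 0
        self.started = time.monotonic()

    def add(self, record: bytes) -> None:
        self.records.append(record)
        self.size += len(record)

    def should_flush(self) -> bool:
        # Upload when the accumulated Change Data reaches the size limit
        # OR the elapsed-time limit, whichever comes first.
        return (self.size >= self.max_bytes
                or time.monotonic() - self.started >= self.max_seconds)
```
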

Metadata files option descriptions
Option Description

Create metadata files in the target folder

When this option is selected, for each data file, a matching metadata file with a .dfm extension will be created under the specified target folder. The metadata files (which are in standard JSON format) provide additional information about the task/data such as the source endpoint type, the source table name, the number of records in the data file, and so on.

For a full description of the metadata file as well as possible uses, see Metadata file description.
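As a rough illustration only, a .dfm file pairs a data file with JSON metadata along these lines. The field names below are invented placeholders, not the actual schema; refer to Metadata file description for the authoritative format:

```json
{
  "sourceEndpointType": "Oracle",
  "sourceTableName": "HR.EMPLOYEES",
  "recordCount": 1000,
  "dataFileName": "LOAD00000001.csv"
}
```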

  7. To determine if the connection information you entered is correct, click Test Connection. If the connection test is successful, click Save.
